Last Update: 2025/3/26
MiniMax Chat Completion API
The MiniMax Chat Completion API generates conversational responses through an OpenAI-compatible interface, so you can call it over plain HTTP or with OpenAI's SDK. This document covers the API endpoint, request parameters, and response structure.
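For instance, here is a minimal sketch of calling the API through the official OpenAI Python SDK. It assumes the /v1 root of the endpoint documented below works as an OpenAI-compatible base URL; verify SDK compatibility for your account before relying on it:

```python
# Minimal sketch: calling the endpoint via the OpenAI Python SDK by
# overriding base_url (an assumption based on the OpenAI-compatible
# interface described above).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://platform.llmprovider.ai/v1",
)

response = client.chat.completions.create(
    model="abab5.5-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```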
Endpoint
POST https://platform.llmprovider.ai/v1/chat/completions
Request Headers
| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_API_KEY |
| Content-Type | application/json |
Request Body
The request body should be a JSON object with the following parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| model | string | (Required) The model to use (e.g., abab5.5-chat). |
| messages | array | (Required) A list of message objects representing the conversation history. |
| stream | boolean | (Optional) Whether to stream the response as it is generated. |
| max_tokens | integer | (Optional) The maximum number of tokens to generate. |
| temperature | number | (Optional) Sampling temperature, between 0 and 1. |
| top_p | number | (Optional) Nucleus sampling probability, between 0 and 1. |
| n | integer | (Optional) Number of completions to generate for each prompt. |
| stop | array | (Optional) Up to 4 sequences where the API will stop generating further tokens. |
| presence_penalty | number | (Optional) Penalty for new tokens based on their presence in the text so far. |
| frequency_penalty | number | (Optional) Penalty for new tokens based on their frequency in the text so far. |
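As an illustration of the stream parameter, the sketch below consumes an incremental response. It assumes OpenAI-style server-sent events (data: {...} chunks terminated by data: [DONE]); that wire format is an assumption, so verify it against the provider's streaming documentation:

```python
import json
import requests

# Hedged sketch of consuming a streamed completion (stream=true).
# Assumes OpenAI-style SSE chunks; confirm the exact format in the docs.
url = "https://platform.llmprovider.ai/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}
payload = {
    "model": "abab5.5-chat",
    "messages": [{"role": "user", "content": "Count to five."}],
    "stream": True,
}

with requests.post(url, headers=headers, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if not line.startswith(b"data: "):
            continue  # skip blank keep-alive lines
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break
        delta = json.loads(chunk)["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
```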
Example Request
```json
{
  "model": "abab5.5-chat",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Tell me a joke."
    }
  ],
  "max_tokens": 50,
  "temperature": 0.7
}
```
Response Body
The response body will be a JSON object containing the generated completions and other metadata.
| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for the completion. |
| object | string | The type of object returned, usually chat.completion. |
| created | integer | Unix timestamp of when the completion was created. |
| model | string | The model used for the completion. |
| choices | array | A list of generated completion choices; each choice carries an index, a message object, and a finish_reason. |
| usage | object | Token usage statistics for the request. |
Example Response
```json
{
  "id": "cmpl-6aF1d2e3G4H5I6J7K8L9M0N1",
  "object": "chat.completion",
  "created": 1678491234,
  "model": "abab5.5-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Why don't scientists trust atoms? Because they make up everything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 16,
    "total_tokens": 26
  }
}
```
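In client code these fields map directly onto the structure above. For instance, given a parsed response body (body is a hypothetical variable, e.g. response.json() from the Python example below):

```python
# `body` is assumed to hold the parsed JSON response shown above.
choice = body["choices"][0]
reply = choice["message"]["content"]
finish = choice["finish_reason"]        # "stop" = the model ended naturally
tokens = body["usage"]["total_tokens"]  # prompt_tokens + completion_tokens
print(f"{reply} (finish_reason={finish}, total_tokens={tokens})")
```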
Example Request
The following examples send the same request from the shell, Node.js, and Python.

Shell

```bash
curl -X POST https://platform.llmprovider.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "abab5.5-chat",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
Node.js

```javascript
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/chat/completions';

// Request body; only the documented parameters are used here.
const data = {
  model: 'abab5.5-chat',
  messages: [
    {
      role: 'user',
      content: 'Hello!'
    }
  ]
};

const headers = {
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => {
    console.log('Response:', response.data);
  })
  .catch(error => {
    console.error('Error:', error);
  });
```
Python

```python
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/chat/completions'

headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}
data = {
    'model': 'abab5.5-chat',
    'messages': [
        {
            'role': 'user',
            'content': 'Hello!'
        }
    ]
}

# json= serializes the payload for us (equivalent to data=json.dumps(data)
# with the Content-Type header above).
response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
    print('Response:', response.json())
else:
    print('Error:', response.status_code, response.text)
```
For any questions or further assistance, please contact us at [email protected].